
    All-optical spiking neurons integrated on a photonic chip


    Silicon-on-insulator polarization rotating micro-ring resonator

    We propose a novel micro-ring resonator that uses quasi-TE polarized light in the bus waveguide to excite the quasi-TM polarized modes of the micro-ring. An all-pass filter is demonstrated on silicon-on-insulator.
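    As a rough illustration of what an all-pass ring-resonator filter does, the sketch below evaluates the textbook power transmission of a single all-pass ring as a function of round-trip phase. The self-coupling coefficient and round-trip loss are illustrative assumptions, not the fabricated device's values, and the polarization-rotation aspect of the proposed resonator is not modelled.

    ```python
    # Textbook all-pass (notch) response of a single ring resonator.
    # t (self-coupling) and a (round-trip amplitude transmission) are assumed values.
    import numpy as np

    t = 0.95                                  # self-coupling coefficient (assumed)
    a = 0.97                                  # round-trip amplitude transmission (assumed)

    phi = np.linspace(-np.pi, np.pi, 1001)    # round-trip phase detuning
    # Standard all-pass ring power transmission versus round-trip phase:
    T = (a**2 - 2 * a * t * np.cos(phi) + t**2) / (1 - 2 * a * t * np.cos(phi) + (a * t)**2)

    print(f"extinction at resonance: {10 * np.log10(T[len(phi) // 2]):.1f} dB")
    ```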

    Advances in photonic reservoir computing on an integrated platform

    Reservoir computing is a recent approach from machine learning and artificial neural networks for solving a broad class of complex classification and recognition problems, such as speech and image recognition. As is typical for methods from these fields, the systems are trained on examples rather than programmed algorithmically. It originated as a training technique for recurrent neural networks in which the network is split into a fixed reservoir that does the 'computation' and a simple readout function that is trained; this technique has achieved state-of-the-art results on several benchmark tasks. Implementations have so far been mainly software based, but a hardware implementation promises to be low-power and fast. We previously demonstrated in simulation that a network of coupled semiconductor optical amplifiers can also be used for this purpose on a simple classification task. This paper discusses two new developments. First, using an amplifier reservoir on an isolated digit recognition task, we identify the delay between the nodes as the most important design parameter and show that, when optimized and combined with coherence, it yields even better results than classical hyperbolic tangent reservoirs. Second, we discuss recent advances in photonic reservoir computing using resonator structures such as photonic crystal cavities and ring resonators. With a network of resonators, feedback of the output to the network, and an appropriate learning rule, periodic signals can be generated in the optical domain. With the right parameters, these resonant structures can also exhibit spiking behaviour.
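    To make the reservoir/readout split described above concrete, the following is a minimal software sketch in the echo-state-network style: a fixed random hyperbolic tangent reservoir is driven by an input, and only a linear readout is trained by ridge regression. The sizes, the toy delay task, and the regularization value are illustrative assumptions; this is not the coupled-amplifier reservoir of the paper.

    ```python
    # Minimal echo-state-network sketch of the reservoir/readout split:
    # a fixed random recurrent "reservoir" does the computation, and only a
    # simple linear readout is trained (here by ridge regression).
    import numpy as np

    rng = np.random.default_rng(0)

    n_in, n_res = 1, 100
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))              # fixed input weights
    W_res = rng.uniform(-0.5, 0.5, (n_res, n_res))            # fixed recurrent weights
    W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))   # scale spectral radius below 1

    def run_reservoir(u):
        """Collect reservoir states for an input sequence u of shape (T, n_in)."""
        x = np.zeros(n_res)
        states = []
        for u_t in u:
            x = np.tanh(W_in @ u_t + W_res @ x)               # hyperbolic tangent nodes
            states.append(x.copy())
        return np.array(states)

    # Toy task: reproduce a delayed copy of a random input signal.
    T = 1000
    u = rng.uniform(-1, 1, (T, n_in))
    y = np.roll(u[:, 0], 5)          # target: input delayed by 5 steps (wrap-around ignored)

    X = run_reservoir(u)
    # Train only the readout with ridge regression; the reservoir stays untouched.
    lam = 1e-6
    W_out = np.linalg.solve(X.T @ X + lam * np.eye(n_res), X.T @ y)

    nrmse = np.sqrt(np.mean((X @ W_out - y) ** 2)) / np.std(y)
    print(f"training NRMSE: {nrmse:.3f}")
    ```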

    Nanophotonic reservoir computing with photonic crystal cavities to generate periodic patterns

    Reservoir computing (RC) is a machine-learning technique inspired by neural systems, and it has been used successfully to solve complex problems such as signal classification and signal generation. These systems are mainly implemented in software, which limits their speed and power efficiency. Several optical and optoelectronic implementations have been demonstrated, in which the signals carry both an amplitude and a phase; this has been shown to enrich the dynamics of the system, which benefits performance. In this paper, we introduce a novel optical architecture based on nanophotonic crystal cavities. It allows many neurons to be integrated on one chip and, compared with other photonic solutions, most closely resembles a classical neural network. Furthermore, the components are passive, which simplifies the design and reduces power consumption. To assess the performance of this network, we train a photonic network to generate periodic patterns using an online learning rule called first-order reduced and corrected error (FORCE). We first train a classical hyperbolic tangent reservoir and then vary some of its properties to incorporate typical aspects of a photonic reservoir, such as continuous-time versus discrete-time signals and complex-valued versus real-valued signals. The nanophotonic reservoir is then simulated, and we explore the role of relevant parameters such as the topology, the phases between the resonators, the number of nodes that are biased, and the delay between the resonators. These parameters must be chosen such that no strong self-oscillations occur. Finally, our results show that for a signal generation task a complex-valued, continuous-time nanophotonic reservoir outperforms a classical (i.e., discrete-time, real-valued) leaky hyperbolic tangent reservoir (normalized root-mean-square error of 0.030 versus 0.127).
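    As a rough sketch of the FORCE style of online learning mentioned above, the example below adapts only the readout of a leaky hyperbolic tangent reservoir with output feedback using recursive least squares, so that the network generates a periodic pattern, and then reports the NRMSE in a free-running test. All sizes, rates, and the sine target are assumptions; this does not model the complex-valued nanophotonic reservoir itself.

    ```python
    # FORCE-style online training sketch: a leaky tanh reservoir with output feedback,
    # readout weights updated by recursive least squares (RLS). All parameters assumed.
    import numpy as np

    rng = np.random.default_rng(1)

    n = 300                       # reservoir size (assumed)
    dt, tau = 0.1, 1.0            # integration step and leak time constant (assumed)
    g = 1.5                       # recurrent gain; > 1 gives rich dynamics before training

    W_rec = g * rng.standard_normal((n, n)) / np.sqrt(n)   # fixed recurrent weights
    W_fb = rng.uniform(-1.0, 1.0, n)                        # fixed output-feedback weights
    w_out = np.zeros(n)                                     # trained linear readout
    P = np.eye(n)                                           # RLS inverse-correlation estimate

    x = 0.5 * rng.standard_normal(n)
    r = np.tanh(x)
    z = 0.0

    n_train = 20000
    target = np.sin(2 * np.pi * np.arange(n_train) * dt / 10.0)   # periodic pattern to generate

    for step in range(n_train):
        # Leaky continuous-time dynamics with the readout fed back into the reservoir.
        x += dt / tau * (-x + W_rec @ r + W_fb * z)
        r = np.tanh(x)
        z = w_out @ r

        if step % 2 == 0:                                   # RLS update of the readout only
            Pr = P @ r
            k = Pr / (1.0 + r @ Pr)
            P -= np.outer(k, Pr)
            w_out -= (z - target[step]) * k                 # correct the first-order error

    # Free-running test: no more weight updates, compare against the continued target.
    n_test = 2000
    z_test = np.empty(n_test)
    for step in range(n_test):
        x += dt / tau * (-x + W_rec @ r + W_fb * z)
        r = np.tanh(x)
        z = w_out @ r
        z_test[step] = z

    y_test = np.sin(2 * np.pi * (n_train + np.arange(n_test)) * dt / 10.0)
    nrmse = np.sqrt(np.mean((z_test - y_test) ** 2)) / np.std(y_test)
    print(f"free-running NRMSE: {nrmse:.3f}")
    ```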